Guidelines for analysis and reporting of clinical trials in oncology.
When analyzing and reporting the results of clinical trials, investigators should follow a simple approach. The purpose of a trial is to estimate an effect or treatment difference which, if present, would have clinical utility when treating new patients. Procedures or methods that do not facilitate precise and impartial estimation and reporting of the treatment effect are likely to mislead investigators. Most often in clinical trials, investigators are interested in estimates of risk ratios (specifically odds or hazard ratios) between the treatment groups or levels of a prognostic factor. These simple ideas suggest that the most useful results from clinical trials will be estimated risk ratios and their confidence limits. Especially in cancer, where disease progression, recurrence, and death are common events following treatment, estimates of risk difference are very relevant. Hypothesis tests and associated P-values, although often (or exclusively) reported, are of lesser utility because they do not fully summarize the data. These recommendations may be seen by some investigators as contrary to accepted practice. It is true that they are somewhat contrary to common practice, but their general acceptance is evident in many journals and in presentations by clinical trial methodologists. Despite some disagreement among statisticians regarding the need to adjust analyses for imbalanced prognostic factors, it is helpful to see whether treatment effects change after accounting for imbalances. When this occurs, it may be of clinical interest. Although we discourage analyses that exclude any patients who meet the eligibility criteria, some circumstances will require that this be done (e.g., when a patient refuses to participate after randomization). Investigators should report, and emphasize as primary, those analyses that include all eligible patients. It is our hope and belief that analysis and reporting of trial results along the guidelines suggested here will result in impartial and useful information for journal readers.
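The guideline's preference for estimates over bare P-values is straightforward to act on: report the risk (odds) ratio together with its confidence limits. A minimal sketch using the standard log-odds-ratio normal approximation (the 2 × 2 counts are hypothetical, purely for illustration):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% confidence limits for a 2x2 table:
    (a, b) = responders/non-responders on treatment,
    (c, d) = responders/non-responders on control."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical trial: 40/60 responses on treatment vs 25/75 on control
or_, lower, upper = odds_ratio_ci(40, 60, 25, 75)
print(f"OR = {or_:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")  # OR = 2.00, CI (1.09, 3.66)
```

Reporting the interval (1.09, 3.66) conveys both the size of the effect and its precision, which a lone P-value does not.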
Positive words carry less information than negative words
We show that the frequency of word use is determined not only by word length (Zipf 1935) and average information content (Piantadosi 2011), but also by emotional content. We have analyzed three established lexica of affective word usage in English, German, and Spanish, verifying that these lexica have a neutral, unbiased emotional content. Taking into account the frequency of word usage, we find that words with a positive emotional content are used more frequently. This lends support to the Pollyanna hypothesis (Boucher 1969) that there should be a positive bias in human expression. We also find that negative words contain more information than positive words: the informativeness of a word increases uniformly as its valence decreases. Our findings support earlier conjectures about (i) the relation between word frequency and information content, and (ii) the impact of positive emotions on communication and social links.
Comment: 16 pages, 3 figures, 3 tables
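The "information content" referred to above is Shannon self-information, −log₂ of a word's relative frequency: rarer words carry more bits. A minimal sketch with hypothetical frequencies (the example frequencies are illustrative, not drawn from the lexica in the abstract):

```python
import math

def information_content(rel_freq):
    """Shannon self-information in bits: I(w) = -log2 p(w)."""
    return -math.log2(rel_freq)

# Hypothetical relative frequencies: the positive word is used 10x more often.
p_positive = 1e-4    # e.g. a common positive word
p_negative = 1e-5    # e.g. a rarer negative word
print(information_content(p_positive))   # ~13.3 bits
print(information_content(p_negative))   # ~16.6 bits
```

Under this measure, the observed higher frequency of positive words directly implies they carry less information per occurrence.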
Statistical Laws Governing Fluctuations in Word Use from Word Birth to Word Death
We analyze the dynamic properties of 10^7 words recorded in English, Spanish
and Hebrew over the period 1800--2008 in order to gain insight into the
coevolution of language and culture. We report language independent patterns
useful as benchmarks for theoretical models of language evolution. A
significantly decreasing (increasing) trend in the birth (death) rate of words
indicates a recent shift in the selection laws governing word use. For new
words, we observe a peak in the growth-rate fluctuations around 40 years after
introduction, consistent with the typical entry time into standard dictionaries
and the human generational timescale. Pronounced changes in the dynamics of
language during periods of war show that word correlations, occurring across
time and between words, are largely influenced by coevolutionary social,
technological, and political factors. We quantify cultural memory by analyzing
the long-term correlations in the use of individual words using detrended
fluctuation analysis.
Comment: Version 1: 31 pages, 17 figures, 3 tables. Version 2 is streamlined, eliminates substantial material, and incorporates referee comments: 19 pages, 14 figures, 3 tables
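The detrended fluctuation analysis used above to quantify long-term correlations can be sketched as follows: integrate the series, detrend it in windows of size n, and read the scaling exponent α off the slope of log F(n) versus log n. This is a generic DFA1 sketch on synthetic data, not the authors' code:

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis (DFA1).
    Returns the scaling exponent alpha: ~0.5 for uncorrelated noise,
    >0.5 for long-range positive correlations."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for n in scales:
        f2 = []
        for i in range(len(y) // n):           # non-overlapping windows
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear fit
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))    # RMS fluctuation F(n)
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(10_000)
print(dfa_exponent(white_noise, [16, 32, 64, 128, 256]))   # typically close to 0.5
```

Applied to a word's yearly usage series, α noticeably above 0.5 would indicate the kind of long-term "cultural memory" the abstract describes.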
Experimental designs for phase I and phase I/II dose-finding studies
We review the rationale behind the statistical design of dose-finding studies as used in phase I and phase I/II clinical trials. We underline what the objectives of such dose-finding studies should be and why the widely used standard design fails to meet any of these objectives. The standard design is a 'memoryless' design, and we discuss how this impacts practical behaviour. Designs introduced over the last two decades can be viewed as designs with memory, and we discuss how these designs are superior to memoryless designs. By superior we mean that they require fewer patients overall, require fewer patients to attain the maximum tolerated dose (MTD), and concentrate a higher percentage of patients at and near the MTD. We reanalyse some recently published studies to support our contention that markedly better results could have been achieved had a design with memory been used instead of a memoryless design.
dfpk: An R package for Bayesian dose-finding designs using Pharmacokinetics (PK) for phase I clinical trials
Background and objective
Dose-finding studies, which aim to find the maximum tolerated dose, and pharmacokinetics studies are the first-in-human studies in the development process of a new pharmacological treatment. In the literature to date, only a few attempts have been made to combine pharmacokinetics and dose-finding, and to our knowledge no software implementation is generally available. In previous papers, we proposed several Bayesian adaptive pharmacokinetics-based dose-finding designs for small populations. The objective of this work is to implement these dose-finding methods in an R package, called dfpk.
Methods
All methods were developed in a sequential Bayesian setting and Bayesian parameter estimation is carried out using the rstan package. All available pharmacokinetics and toxicity data are used to suggest the dose of the next cohort with a constraint regarding the probability of toxicity. Stopping rules are also considered for each method. The ggplot2 package is used to create summary plots of toxicities or concentration curves.
Results
For all implemented methods, dfpk provides a function (nextDose) to estimate the probability of efficacy and to suggest the dose to give to the next cohort, and a function to run trial simulations to design a trial (nsim). The sim.data function generates, at each dose, the toxicity value related to a pharmacokinetic measure of exposure, the AUC, using an underlying one-compartment pharmacokinetic model with linear absorption. It is included as an example, since similar data frames can be generated directly by the user and passed to nsim.
Conclusion
The developed user-friendly R package dfpk, available on the CRAN repository, supports the design of innovative dose-finding studies using PK information.
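The dose-allocation logic common to adaptive designs of this kind can be sketched generically: estimate the toxicity probability at each dose from the accumulated data, then give the next cohort the highest dose whose estimate respects the toxicity constraint, without skipping untried levels. This is a hypothetical Python sketch of that generic rule, not dfpk's actual R API or the authors' exact method:

```python
def next_dose(tox_estimates, target, current_dose_idx):
    """Suggest the next dose level: the highest dose whose estimated
    toxicity probability stays below the target, never skipping a level."""
    admissible = [i for i, p in enumerate(tox_estimates) if p < target]
    if not admissible:
        return None                          # stop: every dose looks too toxic
    best = max(admissible)
    return min(best, current_dose_idx + 1)   # escalate one level at most

# Hypothetical posterior toxicity estimates for 5 dose levels, target 0.30:
print(next_dose([0.05, 0.12, 0.22, 0.38, 0.55], target=0.30, current_dose_idx=1))  # -> 2
```

In the package itself the estimates come from Bayesian PK/toxicity models fitted with rstan; the sketch only shows the decision step applied to those estimates.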
Aggregation Bias: A Proposal to Raise Awareness Regarding Inclusion in Visual Analytics
Data are a powerful tool for making informed decisions. They can be used to design products, to segment the market, and to design policies. However, trusting data too much can have drawbacks. Sometimes a set of indicators can conceal the reality behind them, leading to biased decisions that could be very harmful, for example to underrepresented individuals. Ensuring unbiased decision-making processes is challenging because people have their own beliefs and characteristics and may be unaware of them. However, visual tools can assist decision-making processes and raise awareness regarding potential data issues. This work describes a proposal to fight biases related to aggregated data by detecting issues during visual analysis and highlighting them, trying to avoid drawing inaccurate conclusions.
Prospective SPECT-CT organ dosimetry-driven radiation-absorbed dose escalation using the In-111 (111In)/yttrium 90 (90Y) ibritumomab tiuxetan (Zevalin ®) theranostic pair in patients with lymphoma at myeloablative dose levels
PURPOSE: We prospectively evaluated the feasibility of SPECT-CT/planar organ dosimetry-based radiation dose escalation radioimmunotherapy in patients with recurrent non-Hodgkin's lymphoma using the theranostic pair of 111In/90Y ibritumomab tiuxetan.
METHODS: 24 patients with CD20-positive relapsed or refractory rituximab-sensitive, low-grade, mantle cell, or diffuse large-cell NHL, with normal organ function, platelet counts > 75,000/mm³
RESULTS: Patient-specific hybrid SPECT/CT + planar organ dosimetry was feasible in all 18 cases and was used to determine the patient-specific therapeutic dose and guide dose escalation (26.8 ± 7.3 MBq/kg (mean), 26.3 MBq/kg (median) of
CONCLUSIONS: Patient-specific outpatient
Sample size requirements for separating out the effects of combination treatments: Randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis
Background: In certain diseases, clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can be tested either with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design.
Methods: We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial, the combination of two drugs added to standard treatment is assumed to reduce the hazard of death by 30%, and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial, depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two.
Results: In the absence of an interaction, an eight-fold increase in sample size for the factorial design compared to the combination trial is required to achieve 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides 76% power to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance of being underpowered, of showing significance of only one drug even if both are equally effective, and of missing important interactions.
Conclusions: Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the combination trial.
Trial registration: Current Controlled Trials ISRCTN61649292 (http://www.controlled-trials.com/ISRCTN61649292)
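The power arithmetic behind such comparisons follows the standard Schoenfeld approximation for a log-rank/Cox analysis: power depends on the number of observed events and the log hazard ratio. A rough sketch, where the event count of 250 is a hypothetical illustration rather than the trial's actual design assumption:

```python
from statistics import NormalDist
import math

def schoenfeld_power(events, hazard_ratio, alpha=0.05):
    """Approximate power of a two-sided log-rank test with 1:1 allocation
    (Schoenfeld's formula: effect = |log HR| * sqrt(events / 4))."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    effect = abs(math.log(hazard_ratio)) * math.sqrt(events / 4)
    return NormalDist().cdf(effect - z)

# Hypothetical: a 30% hazard reduction (HR = 0.7) with ~250 observed events
print(round(schoenfeld_power(250, 0.7), 2))   # close to 0.8
```

Because each factor in a factorial design must be detected on its own, typically smaller, effect, the required event count (and hence sample size) multiplies quickly, which is the abstract's eight-fold finding in qualitative terms.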
Changes in monthly unemployment rates may predict changes in the number of psychiatric presentations to emergency services in South Australia
BACKGROUND: To determine the extent to which variations in monthly Mental Health Emergency Department (MHED) presentations to South Australian public hospitals are associated with the Australian Bureau of Statistics (ABS) monthly unemployment rates.
METHODS: Time series modelling of relationships between monthly MHED presentations to South Australian public hospitals, derived from the Integrated South Australian Activity Collection (ISAAC) database, and the ABS monthly unemployment rates in South Australia between January 2004 and June 2011.
RESULTS: Time series modelling using monthly unemployment rates from the ABS as a predictor variable explains 69% of the variation in monthly MHED presentations across public hospitals in South Australia. Thirty-two percent of the variation in the current month's male MHED presentations can be predicted using the male unemployment rate from two months prior. Over 63% of the variation in monthly female MHED presentations can be predicted by either male or female prior monthly unemployment rates.
CONCLUSIONS: The findings of this study highlight that, even under relatively favourable economic conditions, small shifts in monthly unemployment rates can predict variations in monthly MHED presentations, particularly for women. Monthly ABS unemployment rates may be a useful metric for predicting demand for emergency mental health services.
Niranjan Bidargaddi, Tarun Bastiampillai, Geoffrey Schrader, Robert Adams, Cynthia Piantadosi, Jörg Strobel, Graeme Tucker, and Stephen Allison
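The lagged-predictor idea in the Results (this month's presentations predicted from the unemployment rate two months earlier) can be sketched with ordinary least squares on synthetic data. The numbers below are illustrative only, not the ISAAC/ABS series, and simple OLS stands in for the authors' full time series models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic monthly series (illustrative only, not the ISAAC/ABS data):
# presentations respond to the unemployment rate two months earlier.
months = 90
unemployment = 5.5 + 0.8 * rng.standard_normal(months)
presentations = 200 + 30 * np.roll(unemployment, 2) + 5 * rng.standard_normal(months)

lag = 2
X = np.column_stack([np.ones(months - lag), unemployment[:-lag]])  # intercept + lagged rate
y = presentations[lag:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

r2 = 1 - np.var(y - X @ coef) / np.var(y)
print(f"slope per unemployment point: {coef[1]:.1f}, R^2 = {r2:.2f}")
```

A high R² at a given lag, as in the sketch, is the kind of evidence the study reports for unemployment rates predicting MHED demand.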
HIV-1 Superinfection in Women Broadens and Strengthens the Neutralizing Antibody Response
Identifying naturally occurring neutralizing antibodies (NAb) that are cross-reactive against all global subtypes of HIV-1 is an important step toward the development of a vaccine. Establishing the host and viral determinants for eliciting such broad NAbs is also critical for immunogen design. NAb breadth has previously been shown to be positively associated with viral diversity. We therefore hypothesized that superinfected individuals develop a broad NAb response as a result of increased antigenic stimulation by two distinct viruses. To test this hypothesis, plasma samples from 12 superinfected women, each assigned to three singly infected women, were tested against a panel of eight viruses representing four different HIV-1 subtypes at matched time points post-superinfection (∼5 years post-initial infection). Here we show that superinfected individuals develop significantly broader NAb responses post-superinfection when compared to singly infected individuals (RR = 1.68, CI: 1.23–2.30, p = 0.001). This was true even after controlling for NAb breadth developed prior to superinfection, contemporaneous CD4+ T cell count, and viral load. Similarly, both unadjusted and adjusted analyses showed significantly greater potency in superinfected cases compared to controls. Notably, two superinfected individuals were able to neutralize variants from four different subtypes at plasma dilutions > 1:300, suggesting that their NAbs exhibit elite activity. Cross-subtype breadth was detected within a year of superinfection in both of these individuals, which was within 1.5 years of their initial infection. These data suggest that sequential infections lead to augmentation of the NAb response, a process that may provide insight into potential mechanisms that contribute to the development of antibody breadth. Therefore, a successful vaccination strategy that mimics superinfection may lead to the development of broad NAbs in immunized individuals.